BRUTE
Overview
The BRUTE function performs a brute-force grid search to find the global minimum of a mathematical function over a specified range. This approach systematically evaluates the objective function at every point on a multidimensional grid, making it effective for finding global minima in problems where gradient-based methods might get trapped in local minima.
This implementation wraps scipy.optimize.brute from the SciPy optimization library. The algorithm constructs a grid of evaluation points by placing ns evenly spaced points along each dimension between the specified bounds. For an n-dimensional problem, the total number of function evaluations is:
N_{\text{evaluations}} = ns^n
This exponential scaling means the method is best suited for low-dimensional problems (typically 2-4 variables) or when used with coarse grids to identify promising regions.
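For a concrete sense of this scaling, the short sketch below (an illustration only, assuming ns grid points per dimension inclusive of both bounds, which is how scipy.optimize.brute interprets slices with a complex step) shows how quickly the point count grows:
import numpy as np
# Build the same kind of grid that a brute-force search evaluates:
# ns points per dimension between the bounds, inclusive of the endpoints.
ns = 20
bounds = [(-3.0, 3.0), (-3.0, 3.0)]  # two variables -> n = 2
grid = np.mgrid[tuple(slice(lo, hi, complex(0, ns)) for lo, hi in bounds)]
print(grid.shape)         # (2, 20, 20) -> 20 * 20 = 400 points to evaluate
print(ns ** len(bounds))  # 400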
A key feature is the optional polishing step controlled by the finish parameter. After identifying the best gridpoint, a local optimization algorithm refines the result to find a more precise minimum. Available finishers include fmin (Nelder-Mead simplex method) and fmin_powell (Powell’s conjugate direction method). Setting finish to "none" skips this refinement and returns the raw gridpoint result.
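The effect of polishing can be seen by calling scipy.optimize.brute directly. The sketch below is illustrative only; the quadratic objective is a placeholder chosen so its minimum lies off the grid:
from scipy.optimize import brute as scipy_brute
from scipy.optimize import fmin
# Placeholder objective with its minimum at (1.3, -0.7), deliberately off-grid.
f = lambda x: (x[0] - 1.3) ** 2 + (x[1] + 0.7) ** 2
ranges = (slice(-3, 3, complex(0, 11)), slice(-3, 3, complex(0, 11)))  # 11 points per axis
raw = scipy_brute(f, ranges, finish=None)        # best grid point only
polished = scipy_brute(f, ranges, finish=fmin)   # grid point refined by Nelder-Mead
print(raw)       # [ 1.2 -0.6]  (snapped to the nearest grid point)
print(polished)  # approximately [ 1.3 -0.7]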
The function accepts mathematical expressions using x[i] notation for variables (e.g., x[0], x[1]), supporting standard operations and functions including trigonometric functions (sin, cos, tan), exponentials (exp, log), and exponentiation using either ^ or ** notation.
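Internally, an expression string is turned into a Python callable before the search runs. A simplified sketch is shown below; the full implementation in the Python Code section adds validation and a restricted evaluation namespace:
import numpy as np
# Simplified illustration: normalize ^ to ** and evaluate with x bound to a list.
expr = "sin(x[0]) + (x[1] - 1)^2"
expr = expr.replace("^", "**")
objective = lambda x: eval(expr, {"sin": np.sin}, {"x": list(x)})
print(objective([0.0, 1.0]))  # 0.0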
For related global optimization methods, see basinhopping and differential_evolution in SciPy’s documentation.
This example function is provided as-is without any representation of accuracy.
Excel Usage
=BRUTE(func_expr, bounds, ns, finish)
func_expr (str, required): Mathematical expression using x[i] notation for variables, supporting standard mathematical functions and operators. Use ^ or ** for exponentiation.
bounds (list[list], required): 2D list where each row contains [min, max] bounds for a variable. The number of rows determines the dimensionality of the problem.
ns (int, optional, default: 20): Number of grid points along each dimension. Higher values increase accuracy but also computation time. Must be at least 2.
finish (str, optional, default: "fmin"): Local optimization method used to refine the best grid point. Use an empty string or "none" to skip refinement.
Returns (list[list]): 2D list [[x1, x2, …, xn, objective_value]], or error message string.
Examples
Example 1: Demo case 1
Inputs:
| func_expr | bounds | |
|---|---|---|
| (x[0]-1)^2 + (x[1]+2)^2 | -3 | 3 |
| | -3 | 3 |
Excel formula:
=BRUTE("(x[0]-1)^2 + (x[1]+2)^2", {-3,3;-3,3})
Expected output:
| Result | ||
|---|---|---|
| 1 | -2 | 0 |
Example 2: Demo case 2
Inputs:
| func_expr | bounds | | ns | finish |
|---|---|---|---|---|
| (x[0]-2)^2 + (x[1]-1)^2 | -4 | 4 | 15 | none |
| | -4 | 4 | | |
Excel formula:
=BRUTE("(x[0]-2)^2 + (x[1]-1)^2", {-4,4;-4,4}, 15, "none")
Expected output:
| Result | ||
|---|---|---|
| 2.2857142857142856 | 1.1428571428571423 | 0.10204081632653039 |
Example 3: Demo case 3
Inputs:
| func_expr | bounds | | ns |
|---|---|---|---|
| (x[0]+1)^2 + (x[1]-0.5)^2 + (x[2]-2)^2 | -2 | 2 | 12 |
| | -2 | 2 | |
| | 0 | 4 | |
Excel formula:
=BRUTE("(x[0]+1)^2 + (x[1]-0.5)^2 + (x[2]-2)^2", {-2,2;-2,2;0,4}, 12)
Expected output:
| Result | |||
|---|---|---|---|
| -1 | 0.5 | 2 | 0 |
Example 4: Demo case 4
Inputs:
| func_expr | bounds | | ns | finish |
|---|---|---|---|---|
| x[0]^2 + x[1]^2 | -5 | 5 | 10 | fmin_powell |
| | -5 | 5 | | |
Excel formula:
=BRUTE("x[0]^2 + x[1]^2", {-5,5;-5,5}, 10, "fmin_powell")
Expected output:
| Result | ||
|---|---|---|
| 0 | 0 | 0 |
Python Code
import math
import re
import numpy as np
from scipy.optimize import brute as scipy_brute
from scipy.optimize import fmin as scipy_fmin
from scipy.optimize import fmin_powell as scipy_fmin_powell
# Pre-build safe_globals dictionary for expression evaluation (performance optimization)
_SAFE_GLOBALS = {
"math": math,
"np": np,
"numpy": np,
"__builtins__": {},
}
# Add all math module functions
_SAFE_GLOBALS.update({
name: getattr(math, name)
for name in dir(math)
if not name.startswith("_")
})
# Add common numpy/math function aliases
_SAFE_GLOBALS.update({
"sin": np.sin,
"cos": np.cos,
"tan": np.tan,
"asin": np.arcsin,
"arcsin": np.arcsin,
"acos": np.arccos,
"arccos": np.arccos,
"atan": np.arctan,
"arctan": np.arctan,
"sinh": np.sinh,
"cosh": np.cosh,
"tanh": np.tanh,
"exp": np.exp,
"log": np.log,
"ln": np.log,
"log10": np.log10,
"sqrt": np.sqrt,
"abs": np.abs,
"pow": np.power,
"pi": math.pi,
"e": math.e,
"inf": math.inf,
"nan": math.nan,
})
def brute(func_expr, bounds, ns=20, finish='fmin'):
"""
Perform a brute-force grid search to approximate the global minimum of a function.
See: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brute.html
This example function is provided as-is without any representation of accuracy.
Args:
func_expr (str): Mathematical expression using x[i] notation for variables, supporting standard mathematical functions and operators. Use ^ or ** for exponentiation.
bounds (list[list]): 2D list where each row contains [min, max] bounds for a variable. Number of rows determines the dimensionality of the problem.
        ns (int, optional): Number of grid points along each dimension. Higher values increase accuracy but also computation time. Must be at least 2. Default is 20.
        finish (str, optional): Local optimization method to refine the best grid point. Valid options: 'fmin', 'fmin_powell', 'none', an empty string, or None (the latter three skip refinement). Default is 'fmin'.
Returns:
list[list]: 2D list [[x1, x2, ..., xn, objective_value]], or error message string.
"""
# Validate func_expr
if not isinstance(func_expr, str):
return "Invalid input: func_expr must be a string."
if not re.search(r'\bx\b', func_expr):
return "Invalid input: func_expr must reference variable x (e.g., x[0])."
# Convert caret notation to Python exponentiation
func_expr = re.sub(r'\^', '**', func_expr)
# Validate ns first since it's used in slice construction
try:
ns = int(ns)
except (TypeError, ValueError):
return "Invalid input: ns must be an integer."
if ns < 2:
return "Invalid input: ns must be at least 2."
# Validate bounds
if not isinstance(bounds, list) or len(bounds) == 0:
return "Invalid input: bounds must be a 2D list of [min, max] pairs."
slices = []
for idx, bound in enumerate(bounds):
if not isinstance(bound, list) or len(bound) != 2:
return "Invalid input: each bound must be a [min, max] pair."
try:
lower = float(bound[0])
upper = float(bound[1])
except (TypeError, ValueError):
return "Invalid input: bounds must contain numeric values."
if lower > upper:
return f"Invalid input: lower bound must not exceed upper bound for variable index {idx}."
slices.append(slice(lower, upper, complex(0, ns)))
# Validate and normalize finish option
finish_normalized = finish
if isinstance(finish, str):
finish_normalized = finish.strip().lower()
finish_map = {
'fmin': scipy_fmin,
'fmin_powell': scipy_fmin_powell,
None: None,
'none': None,
'': None,
}
if finish_normalized not in finish_map:
return "Invalid input: finish must be 'fmin', 'fmin_powell', 'none', or an empty string."
finish_callable = finish_map[finish_normalized]
# Objective evaluation
def objective(x_vector):
"""Evaluate the objective function for a given input vector."""
try:
local_x = [float(val) for val in np.atleast_1d(x_vector)]
value = eval(func_expr, _SAFE_GLOBALS, {"x": local_x})
except Exception as exc:
raise ValueError(f"Error evaluating func_expr: {exc}")
try:
result_value = float(value)
except (TypeError, ValueError):
raise ValueError("Objective expression must return a scalar numeric value.")
if math.isnan(result_value):
raise ValueError("Objective evaluation produced NaN.")
if math.isinf(result_value):
raise ValueError("Objective evaluation produced infinity.")
return result_value
try:
result = scipy_brute(
objective,
ranges=slices,
Ns=ns,
full_output=True,
finish=finish_callable,
disp=False,
)
except ValueError as exc:
return f"Error during brute optimization: {str(exc)}"
except Exception as exc:
return f"Unexpected error during brute optimization: {str(exc)}"
# Validate result structure
if not isinstance(result, tuple) or len(result) < 2:
return "Brute optimization returned unexpected result structure."
solution_vector, objective_value = result[0], result[1]
try:
solution_list = [float(val) for val in np.atleast_1d(solution_vector)]
except (TypeError, ValueError):
return "Error converting solution vector to floats."
objective_value = float(objective_value)
return [solution_list + [objective_value]]
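A quick usage check of the Python function against Demo case 1 above (a sketch only; the polished values are approximate and may differ in the last few decimal places):
result = brute("(x[0]-1)^2 + (x[1]+2)^2", [[-3, 3], [-3, 3]])
print(result)  # approximately [[1.0, -2.0, 0.0]]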